33 research outputs found

    Generating complete all-day activity plans with genetic algorithms

    Get PDF
    Activity-based demand generation constructs complete all-day activity plans for each member of a population and derives transportation demand from the fact that consecutive activities at different locations need to be connected by travel. Besides many other advantages, activity-based demand generation also fits well into the paradigm of multi-agent simulation, where each traveler is kept as an individual throughout the whole modeling process. In this paper, we present a new approach to the problem, which uses genetic algorithms (GA). Our GA keeps, for each member of the population, several instances of possible all-day activity plans in memory. Those plans are modified by mutation and crossover, while 'bad' instances are eventually discarded. Any GA needs a fitness function to evaluate the performance of each instance. For all-day activity plans, it makes sense to use a utility function to obtain such a fitness; in consequence, a significant part of the paper is spent discussing such a utility function. In addition, the paper shows the performance of the algorithm on a few selected problems, including very busy and rather non-busy days.
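    As an illustration of the approach, the following is a minimal, hedged sketch of such a GA for a single agent: it keeps several candidate all-day plans, applies mutation and crossover, scores them with a utility-style fitness, and discards the worst instances. The plan representation, the utility terms (TYPICAL, TRAVEL_TIME), and the operator details are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal GA sketch for evolving one agent's all-day activity plan.
# Plan = list of (activity_type, duration_hours); travel between consecutive
# activities is approximated by a fixed time penalty. All numbers are assumed.
import math
import random

ACTIVITIES = ["home", "work", "shop", "leisure"]
TRAVEL_TIME = 0.5                                   # assumed average trip length in hours
TYPICAL = {"home": 10.0, "work": 8.0, "shop": 1.0, "leisure": 2.0}

def utility(plan):
    """Fitness: diminishing returns for time spent at activities, minus travel,
    minus a penalty if the plan does not fit into 24 hours."""
    score = sum(TYPICAL[a] * math.log(max(d, 0.1) / 0.1) for a, d in plan)
    score -= 6.0 * TRAVEL_TIME * (len(plan) - 1)    # assumed disutility of travel
    total = sum(d for _, d in plan) + TRAVEL_TIME * (len(plan) - 1)
    return score - 20.0 * max(0.0, total - 24.0)

def mutate(plan):
    """Randomly shift one activity's duration or swap its type."""
    plan = list(plan)
    i = random.randrange(len(plan))
    activity, duration = plan[i]
    if random.random() < 0.5:
        plan[i] = (activity, max(0.5, duration + random.uniform(-1.0, 1.0)))
    else:
        plan[i] = (random.choice(ACTIVITIES), duration)
    return plan

def crossover(p1, p2):
    """Splice the front of one plan onto the tail of the other."""
    cut = random.randrange(1, min(len(p1), len(p2)))
    return p1[:cut] + p2[cut:]

def evolve(seed_plan, pool_size=8, generations=300):
    """Keep several plan instances for the agent; discard the worst each round."""
    pool = [mutate(seed_plan) for _ in range(pool_size)]
    for _ in range(generations):
        child = crossover(*random.sample(pool, 2))
        if random.random() < 0.8:
            child = mutate(child)
        pool.append(child)
        pool.sort(key=utility, reverse=True)
        pool = pool[:pool_size]                     # 'bad' instances are discarded
    return pool[0]

best = evolve([("home", 9.0), ("work", 8.0), ("shop", 1.0), ("home", 5.0)])
print(best, round(utility(best), 2))
```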

    Location choice for a continuous simulation of long periods under changing conditions

    Get PDF
    JTLU vol. 7, no. 2, pp. 85-103 (2014). The authors propose a location choice procedure that is capable of handling changing conditions of aspects with different time horizons. It integrates expected travel time, current location effectiveness, prospective location effectiveness, and individual unexplained location perception into a decision heuristic that considers different planning horizons simultaneously and decides on the fly about future location visits. Multiple simulation runs illustrate agents' location choice behavior in various situations and confirm that the model enables agents to simultaneously consider seasonal effects, weather conditions, expected travel times, and individual unexplained location preferences in their location choice.
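    A minimal sketch of such a decision heuristic is given below, assuming a simple additive score: it combines expected travel time, current and prospective location effectiveness, and a reproducible random term standing in for individual unexplained location perception. The weights and names (location_score, choose_location) are illustrative, not the authors' specification.

```python
# Hedged sketch of a location-choice score combining the four aspects named
# above; weights, effectiveness inputs, and the decision rule are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    travel_time: float       # expected travel time to reach the location (hours)
    current_eff: float       # how well the location serves the purpose right now (0..1)
    prospective_eff: float   # expected effectiveness at the planned future visit (0..1)

def location_score(agent_id, loc, w_time=1.0, w_now=2.0, w_future=1.5, sigma=0.3):
    """Higher is better; the Gaussian term stands in for individual, unexplained
    location perception and stays fixed per agent-location pair within a run."""
    rng = random.Random(hash((agent_id, loc.name)))
    unexplained = rng.gauss(0.0, sigma)
    return (-w_time * loc.travel_time
            + w_now * loc.current_eff
            + w_future * loc.prospective_eff
            + unexplained)

def choose_location(agent_id, candidates):
    """On-the-fly decision: pick the candidate with the highest score."""
    return max(candidates, key=lambda loc: location_score(agent_id, loc))

lake = Location("lake", travel_time=0.8, current_eff=0.2, prospective_eff=0.9)
mall = Location("mall", travel_time=0.3, current_eff=0.7, prospective_eff=0.7)
print(choose_location(agent_id=42, candidates=[lake, mall]).name)
```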

    Agent-based model for continuous activity planning with an open planning horizon

    Get PDF
    The paper proposes the microscopic travel demand model continuous target-based activity planning (C-TAP), which generates multi-week schedules by means of a continuous planning approach with an open planning horizon. C-TAP introduces behavioral targets to describe people's motivation to perform activities, and it uses a planning heuristic to make on-the-fly decisions about upcoming activities. The planning heuristic bases its decisions on three aspects: a discomfort index derived from deviations of agents' past performance from their behavioral targets; the effectiveness of immediate execution; and activity execution options available in the near future. The paper reports the results of a test scenario based on an existing 6-week continuous travel diary and validates C-TAP by comparing simulation results with observed behavioral patterns along several dimensions (weekday similarities, weekday execution probabilities of activities, transition probabilities between activities, duration distributions of activities, frequency distributions of activities, execution interval distributions of activities, and weekly travel probability distributions). The results show that C-TAP has the capability to reproduce observed behavior and the flexibility to introduce new behavioral patterns.
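    The following is a hedged sketch of one target-based planning step in the spirit of C-TAP: a discomfort value derived from the deviation of recent performance from a behavioral target is weighed against the effectiveness of immediate execution and the best option available in the near future. The formulas, weights, and example numbers are simplified assumptions, not the model's actual equations.

```python
# Illustrative sketch of a single on-the-fly planning decision; the discomfort
# formula, weights, and effectiveness inputs are simplified assumptions.
from dataclasses import dataclass

@dataclass
class Target:
    activity: str
    hours_per_week: float              # behavioral target
    performed_recently: float = 0.0    # hours performed in the current window

    def discomfort(self):
        """Deviation of recent performance from the behavioral target."""
        return max(0.0, self.hours_per_week - self.performed_recently)

def score_option(target, effectiveness_now, best_future_effectiveness,
                 w_disc=1.0, w_now=1.0, w_future=0.5):
    """High discomfort and good immediate conditions push towards executing now;
    a clearly better option in the near future pushes towards waiting."""
    return (w_disc * target.discomfort()
            + w_now * effectiveness_now
            - w_future * best_future_effectiveness)

targets = [Target("work", 40.0, performed_recently=32.0),
           Target("shopping", 2.0, performed_recently=0.0),
           Target("leisure", 10.0, performed_recently=9.0)]

# effectiveness values (0..1) would come from opening hours, weather, season, ...
now = {"work": 0.9, "shopping": 0.6, "leisure": 0.4}
future = {"work": 0.9, "shopping": 0.9, "leisure": 0.8}

next_activity = max(targets,
                    key=lambda t: score_option(t, now[t.activity], future[t.activity]))
print(next_activity.activity)   # on-the-fly decision about the upcoming activity
```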

    MATSim-T : Architecture and Simulation Times

    Get PDF
    Micro-simulations for transport planning are becoming increasingly important in traffic simulation, traffic analysis, and traffic forecasting. In the last decades, the shift from typically aggregated data to more detailed, individual-based, complex data (e.g. GPS tracking) and the continuously growing computer performance at a fixed price level have made it possible to use microscopic models for large-scale planning regions. This chapter presents such a micro-simulation. The work is part of the research project MATSim (Multi Agent Transport Simulation, http://matsim.org). The focus here lies on design and implementation issues as well as on the computational performance of different parts of the system. Based on a study of Swiss daily traffic (ca. 2.3 million individuals using motorized individual transport, producing about 7.1 million trips, assigned to a Swiss network model with about 60,000 links, simulated and optimized fully time-dynamically for a complete workday), it is shown that the system is able to generate those traffic patterns in about 36 hours of computation time.
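    For orientation, the sketch below illustrates the kind of iterative loop such an agent-based simulation runs: every agent keeps one or more individual day plans, the selected plans are executed on the network, scored, and a share of agents replans before the next iteration. This is a toy illustration of the general co-evolutionary idea, not MATSim's actual API or data model.

```python
# Toy simulate-score-replan loop (not MATSim's actual API): agents keep
# individual plans; plans are executed, scored against the resulting
# congestion, and a share of agents replans each iteration.
import random

class Agent:
    def __init__(self):
        self.plans = [{"departure": 7.0 + random.random() * 2.0}]   # toy plan
        self.selected = 0
        self.score = 0.0

def execute(agents):
    """Toy 'network loading': congestion grows with simultaneous departures."""
    load = {}
    for a in agents:
        slot = round(a.plans[a.selected]["departure"], 1)
        load[slot] = load.get(slot, 0) + 1
    return load

def score(plan, load):
    """Prefer departing close to 8:00 with little congestion."""
    slot = round(plan["departure"], 1)
    return -abs(plan["departure"] - 8.0) - 0.01 * load.get(slot, 0)

def iterate(agents, iterations=20, replanning_share=0.1):
    for _ in range(iterations):
        load = execute(agents)                                   # mobility simulation
        for a in agents:
            a.score = score(a.plans[a.selected], load)           # scoring
        for a in random.sample(agents, int(len(agents) * replanning_share)):
            new_plan = dict(a.plans[a.selected])
            new_plan["departure"] += random.uniform(-0.5, 0.5)   # time mutation
            a.plans.append(new_plan)                             # keep several plans
            a.selected = max(range(len(a.plans)),
                             key=lambda i: score(a.plans[i], load))

agents = [Agent() for _ in range(1000)]
iterate(agents)
print(round(sum(a.score for a in agents) / len(agents), 3))     # average plan score
```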

    PISim: A parallelization framework for interaction simulations

    No full text
    This paper presents PISim, a Parallelization framework for Interaction Simulations. PISim can be used to simulate models of short-ranged object interactions in space on high-performance computers. By using PISim, users can avoid the difficulties and complexity usually involved in porting sequential source code to parallel machines. The framework achieves this by combining several fast and simple elements. First, it uses an efficient representation of space (a helical array) which exploits temporal continuity. Second, it uses a straightforward way of communication between processors by means of message passing. Third, it arranges simulation domains in cell clusters that are treated as individual objects. Fourth, it monitors the computation times of each cluster continuously and uses model-based control to grow or shrink the clusters as needed to balance the load between processors. Fifth, it encapsulates and hides all this complexity from the user by using programming templates to provide a slim interface for end users. This small interface makes it possible to adapt the framework easily to new problems. It shortens development cycles and thereby enables users not familiar with parallel programming techniques to adopt rapid prototyping paradigms.
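    The load-balancing element can be illustrated with a small sketch: each cell cluster reports its measured computation time, and cells are shifted between neighboring clusters until their times converge. The data structures and the proportional-control rule below are illustrative assumptions, not PISim's implementation.

```python
# Hedged sketch of growing/shrinking neighboring cell clusters to balance load;
# the cost model and control gain are crude stand-ins for model-based control.
from collections import deque

class CellCluster:
    def __init__(self, cells):
        self.cells = deque(cells)     # contiguous slice of the (helical) cell array
        self.compute_time = 0.0       # measured per-step computation time

def rebalance(clusters, gain=0.5):
    """Shift cells between neighbors so their computation times converge."""
    for left, right in zip(clusters, clusters[1:]):
        imbalance = left.compute_time - right.compute_time
        # assume cost roughly proportional to the number of cells per cluster
        cost_per_cell = (left.compute_time + right.compute_time) / max(
            1, len(left.cells) + len(right.cells))
        n_move = int(gain * abs(imbalance) / max(cost_per_cell, 1e-9) / 2)
        for _ in range(n_move):
            if imbalance > 0 and len(left.cells) > 1:
                right.cells.appendleft(left.cells.pop())   # shrink left, grow right
            elif imbalance < 0 and len(right.cells) > 1:
                left.cells.append(right.cells.popleft())   # shrink right, grow left

# toy usage: three clusters with unequal measured computation times
clusters = [CellCluster(range(0, 400)), CellCluster(range(400, 600)),
            CellCluster(range(600, 1000))]
clusters[0].compute_time, clusters[1].compute_time, clusters[2].compute_time = 4.0, 2.0, 1.0
rebalance(clusters)
print([len(c.cells) for c in clusters])
```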

    Q-learning for flexible learning of daily activity plans

    No full text
    Q-learning is a method from artificial intelligence to solve the reinforcement learning problem (RLP), defined as follows. An agent is faced with a set of states, S. For each state s there is a set of actions, A(s), that the agent can take and that take the agent (deterministically or stochastically) to another state. For each state the agent receives a (possibly stochastic) reward. The task is to select actions such that the reward is maximized. Activity generation is used for demand generation in the context of transportation simulation. For each member of a synthetic population, a daily activity plan stating a sequence of activities (e.g., home-work-shop-home), including locations and times, needs to be found. Activities at different locations generate demand for transportation. Activity generation can be modeled as an RLP with the states given by the triple (type of activity, starting time of activity, time already spent at the activity). The possible actions are either to stay at the current activity or to move to another activity. Rewards are given as "utility per time slice," which corresponds to a coarse version of marginal utility. Q-learning has the property that, by repeating similar experiences over and over again, the agent looks forward in time; that is, the agent can also follow paths through state space in which high rewards are given only at the end. This paper presents computational results with such an algorithm for daily activity planning.
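    A compact, hedged sketch of Q-learning on this state/action/reward structure follows; the time discretization, utility numbers, and epsilon-greedy parameters are illustrative assumptions, not the paper's settings.

```python
# Tabular Q-learning on states (activity, start_time, time_spent) with actions
# 'stay' or switch to another activity; rewards are a coarse utility per slice.
import random
from collections import defaultdict

ACTIVITIES = ["home", "work", "shop"]
SLOT = 1.0                                  # one-hour time slices (assumed)
TYPICAL = {"home": 8.0, "work": 8.0, "shop": 1.0}

def reward(activity, time_spent):
    """Utility per time slice: diminishing the longer the activity has lasted."""
    return TYPICAL[activity] / (time_spent + 1.0)

def step(state, action):
    """State = (activity, start_time, time_spent); action = 'stay' or a new activity."""
    activity, start, spent = state
    if action == "stay":
        return (activity, start, spent + SLOT)
    return (action, start + spent + SLOT, 0.0)   # switching costs one slice of travel

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state = ("home", 0.0, 0.0)
        while state[1] + state[2] < 24.0:                      # one simulated day
            actions = ["stay"] + [a for a in ACTIVITIES if a != state[0]]
            if random.random() < epsilon:
                action = random.choice(actions)                # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])   # exploit
            nxt = step(state, action)
            r = reward(nxt[0], nxt[2])
            nxt_actions = ["stay"] + [a for a in ACTIVITIES if a != nxt[0]]
            best_next = max(Q[(nxt, a)] for a in nxt_actions)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

Q = q_learning()
print(len(Q), "state-action values learned")
```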